
    Sequential Convex Programming Methods for Solving Nonlinear Optimization Problems with DC constraints

    This paper investigates the relation between sequential convex programming (SCP), as defined e.g. in [24], and DC (difference of two convex functions) programming. We first present an SCP algorithm for solving nonlinear optimization problems with DC constraints and prove its convergence. We then combine the proposed algorithm with a relaxation technique to handle inconsistent linearizations. Numerical tests are performed to investigate the behaviour of this class of algorithms.
    Comment: 18 pages, 1 figure
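    As an illustration of the linearization step at the heart of such methods, here is a minimal Python sketch of an SCP (convex-concave) iteration for a single DC constraint g(x) - h(x) <= 0: the convex part h is replaced by its linearization at the current iterate, which leaves a convex subproblem. The toy objective, the choice h(x) = ||x||_1, the cvxpy-based subproblem, and the stopping rule are illustrative assumptions, not the algorithm or test problems of the paper.

        # Minimal SCP sketch for a DC constraint g(x) - h(x) <= 0 with
        # g(x) = ||x||^2 and h(x) = ||x||_1 (both convex). Illustrative only.
        import numpy as np
        import cvxpy as cp

        np.random.seed(0)
        n = 5
        c = np.random.randn(n)      # objective target: minimize ||x - c||^2
        r = 1.0                     # constraint level: ||x||^2 - ||x||_1 <= r

        x_k = np.zeros(n)           # starting point
        for it in range(20):
            # Subgradient and value of the convex part h(x) = ||x||_1 at x_k.
            s_k = np.sign(x_k)
            h_k = np.abs(x_k).sum()

            # Convex subproblem: h is replaced by its linearization, which
            # lower-bounds h, so the linearized constraint inner-approximates
            # the true feasible set and the subproblem stays convex.
            x = cp.Variable(n)
            lin_h = h_k + s_k @ (x - x_k)
            prob = cp.Problem(cp.Minimize(cp.sum_squares(x - c)),
                              [cp.sum_squares(x) - lin_h <= r])
            prob.solve()

            if np.linalg.norm(x.value - x_k) < 1e-6:
                break
            x_k = x.value

        print("SCP iterate:", x_k)

    The relaxation technique mentioned in the abstract addresses the case where this linearized feasible set turns out to be empty.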

    A Primal-Dual Algorithmic Framework for Constrained Convex Minimization

    We present a primal-dual algorithmic framework for obtaining approximate solutions to a prototypical constrained convex optimization problem, and rigorously characterize how common structural assumptions affect the numerical efficiency. Our main analysis technique provides a fresh perspective on Nesterov's excessive gap technique in a structured fashion and unifies it with smoothing and primal-dual methods. For instance, through the choice of a dual smoothing strategy and a center point, our framework subsumes decomposition algorithms, the augmented Lagrangian method, and the alternating direction method of multipliers (ADMM) as special cases, and provides optimal convergence rates on both the primal objective residual and the primal feasibility gap of the iterates for all of these cases.
    Comment: 54 pages, 7 tables, 12 figures
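    To make one of these special cases concrete, here is a minimal numpy sketch of the augmented Lagrangian (method of multipliers) applied to a prototypical linearly constrained problem, min_x (1/2)||x - c||^2 subject to Ax = b. The quadratic objective, the fixed penalty parameter rho, and the plain dual ascent step are illustrative assumptions; the framework in the paper covers far more general smoothing strategies and updates.

        # Augmented Lagrangian sketch for min (1/2)||x - c||^2  s.t.  Ax = b.
        # Illustrative special case; not the paper's general framework.
        import numpy as np

        np.random.seed(1)
        m, n = 3, 6
        A = np.random.randn(m, n)
        b = np.random.randn(m)
        c = np.random.randn(n)
        rho = 1.0                   # penalty parameter (kept fixed here)

        x = np.zeros(n)
        y = np.zeros(m)             # dual variable (Lagrange multipliers)
        I = np.eye(n)
        for it in range(200):
            # Primal step: minimize the augmented Lagrangian in x; for this
            # quadratic objective the minimizer has a closed form.
            x = np.linalg.solve(I + rho * A.T @ A, c - A.T @ y + rho * A.T @ b)
            # Dual step: gradient ascent on the dual with step size rho.
            y = y + rho * (A @ x - b)

        print("primal feasibility gap ||Ax - b|| =", np.linalg.norm(A @ x - b))

    Replacing the single primal minimization with alternating minimization over a split variable yields ADMM, another of the special cases listed above.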

    Randomized Block-Coordinate Optimistic Gradient Algorithms for Root-Finding Problems

    In this paper, we develop two new randomized block-coordinate optimistic gradient algorithms to approximate a solution of nonlinear equations in large-scale settings, which are called root-finding problems. Our first algorithm is non-accelerated with constant stepsizes, and achieves an $\mathcal{O}(1/k)$ best-iterate convergence rate on $\mathbb{E}[\Vert Gx^k\Vert^2]$ when the underlying operator $G$ is Lipschitz continuous and satisfies a weak Minty solution condition, where $\mathbb{E}[\cdot]$ is the expectation and $k$ is the iteration counter. Our second method is a new accelerated randomized block-coordinate optimistic gradient algorithm. We establish both $\mathcal{O}(1/k^2)$ and $o(1/k^2)$ last-iterate convergence rates on both $\mathbb{E}[\Vert Gx^k\Vert^2]$ and $\mathbb{E}[\Vert x^{k+1} - x^k\Vert^2]$ for this algorithm under the co-coerciveness of $G$. In addition, we prove that the iterate sequence $\{x^k\}$ converges to a solution almost surely, and $\Vert Gx^k\Vert^2$ attains an $o(1/k)$ almost sure convergence rate. Then, we apply our methods to a class of large-scale finite-sum inclusions, which covers prominent applications in machine learning, statistical learning, and network optimization, especially in federated learning. We obtain two new federated learning-type algorithms and their convergence rate guarantees for solving this problem class.
    Comment: 30 pages
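    For intuition about the non-accelerated method's update, here is a minimal numpy sketch of a randomized block-coordinate optimistic (Popov-style) gradient step for the root-finding problem $Gx = 0$. The linear co-coercive operator, the uniform block sampling, the conservative constant step size, and the fact that the full operator value is recomputed at every iteration are illustrative simplifications, not the paper's exact scheme or its constants.

        # Randomized block-coordinate optimistic gradient sketch for Gx = 0.
        # Illustrative simplification; not the paper's exact algorithm.
        import numpy as np

        np.random.seed(2)
        d, n_blocks = 8, 4
        M = np.random.randn(d, d)
        M = M @ M.T + np.eye(d)         # symmetric positive definite => G co-coercive

        def G(x):                       # operator whose root x* = 0 we seek
            return M @ x

        blocks = np.array_split(np.arange(d), n_blocks)
        eta = 0.5 / np.linalg.norm(M, 2)    # conservative step size, below 1/(2L)

        x = np.random.randn(d)
        g_prev = G(x)                   # operator value at the previous iterate
        for k in range(5000):
            i = np.random.randint(n_blocks)     # sample one block uniformly
            g = G(x)                            # (a true block method would only
                                                #  evaluate the sampled block)
            # Optimistic correction 2*G(x^k) - G(x^{k-1}), applied to one block only.
            x[blocks[i]] -= eta * (2.0 * g[blocks[i]] - g_prev[blocks[i]])
            g_prev = g

        print("residual ||Gx|| =", np.linalg.norm(G(x)))

    The accelerated variant described in the abstract attains the faster $\mathcal{O}(1/k^2)$ last-iterate rates under co-coerciveness of $G$.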